
Allow onnx DML backend #3072

Merged
merged 8 commits into chaiNNer-org:main on Jan 4, 2025
Conversation

@FNsi (Contributor) commented Jan 4, 2025

An accident caused my last pull request to be closed; that was my fault.
This is just a copy-paste ("Ctrl+V") of the closed one, #3045.

@FNsi (Contributor, Author) commented Jan 4, 2025

See details in #3045.

@joeyballentine (Member) left a comment

Thanks

@joeyballentine merged commit f230ce3 into chaiNNer-org:main on Jan 4, 2025
28 checks passed
@FNsi (Contributor, Author) commented Jan 5, 2025

Thanks

😁

Should I say "you're welcome"? But I'm only a user.

So,
Thank you.
Haha

@Kim2091 (Collaborator) commented Jan 9, 2025

@FNsi I'm on the latest nightly build, which includes this. DML is not showing up as an option with all default dependencies installed.

Manually installing onnxruntime-directml==1.17.1 in chaiNNer's Python environment does not make it appear in the settings menu either. TensorRT, CUDA, and CPU are the only options.

Uninstalling the other ONNX-related libraries and installing only directml makes it appear, but of course it won't function like that.
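
A minimal way to check what the installed ONNX Runtime actually exposes (a sketch; run it inside chaiNNer's Python environment):

import onnxruntime as ort

# Lists every execution provider this onnxruntime build was compiled with;
# DML only appears if the onnxruntime-directml binaries are the active ones.
print(ort.get_available_providers())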

@FNsi (Contributor, Author) commented Jan 9, 2025

@Kim2091
A quick fix that comes to mind: add DML to the default provider search.

In the source tree:
backend/src/packages/chaiNNer_onnx/settings.py

Or patch it directly in the installed location:

/chaiNNer/resources/src/packages/chaiNNer_onnx/settings.py

from typing import List, cast

import onnxruntime as ort


def get_providers():
    # All execution providers the installed onnxruntime build supports.
    providers = cast(List[str], ort.get_available_providers())

    # Prefer DML when present, then CUDA, then CPU.
    default = providers[0]
    if "DmlExecutionProvider" in providers:
        default = "DmlExecutionProvider"
    elif "CUDAExecutionProvider" in providers:
        default = "CUDAExecutionProvider"
    elif "CPUExecutionProvider" in providers:
        default = "CPUExecutionProvider"

    return providers, default


def get_provider_label(identifier: str) -> str:
    label = identifier.replace("ExecutionProvider", "")
    if label.lower() == "tensorrt":
        label = "TensorRT"
    return label


execution_providers, default_provider = get_providers()

That should display DML and CUDA at the same time. Testing is needed since I don't have an NVIDIA GPU.
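
A quick sanity check of the patched selection logic (the output shown is an assumption for a machine where onnxruntime-directml is the active build):

providers, default = get_providers()
print(providers, default)
# Expected on a DML-only setup, e.g.:
# ['DmlExecutionProvider', 'CPUExecutionProvider'] DmlExecutionProvider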

@Kim2091 (Collaborator) commented Jan 9, 2025

@FNsi Thank you for the suggestion. I'm unable to compile chaiNNer right now, but I did try the fix in the installed copy, to no avail.

@Kim2091 (Collaborator) commented Jan 9, 2025

This may be an issue with ORT itself. I printed the available providers with the default chaiNNer ONNX packages + the ONNX DML package:

print(providers)
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']

@FNsi (Contributor, Author) commented Jan 9, 2025

print(providers)
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']

Also, I found people in other repos complaining about the same thing 😅😅. I think you are right.
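
One way to confirm the suspected packaging conflict: onnxruntime, onnxruntime-gpu, and onnxruntime-directml all install the same onnxruntime module, so when several are present in one environment, only one set of native binaries survives. A minimal sketch to list which distributions are installed (the distribution names are the official PyPI ones):

from importlib.metadata import PackageNotFoundError, version

# More than one hit here means the builds are shadowing each other, and
# get_available_providers() reflects whichever package's binaries won.
for dist in ("onnxruntime", "onnxruntime-gpu", "onnxruntime-directml"):
    try:
        print(dist, version(dist))
    except PackageNotFoundError:
        print(dist, "not installed")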
